    Meta-Learning the Inductive Biases of Simple Neural Circuits

    Animals receive noisy and incomplete information, from which they must learn how to react in novel situations. A fundamental problem is that training data is always finite, making it unclear how to generalise to unseen data. But animals do react appropriately to unseen data, wielding Occam's razor to select a parsimonious explanation of the observations. How they do this is called their inductive bias, and it is implicitly built into the operation of animals' neural circuits. This relationship between an observed circuit and its inductive bias is a useful explanatory window for neuroscience, allowing design choices to be understood normatively. However, it is generally very difficult to map circuit structure to inductive bias. In this work we present a neural network tool to bridge this gap. The tool allows us to meta-learn the inductive bias of neural circuits by learning functions that a neural circuit finds easy to generalise, since easy-to-generalise functions are exactly those the circuit chooses to explain incomplete data. We show that in systems where the inductive bias is known analytically, i.e. linear and kernel regression, our tool recovers it. Then, we show it is able to flexibly extract inductive biases from differentiable circuits, including spiking neural networks, and use it to interpret recent connectomic data through their effect on generalisation. This illustrates the intended use of our tool: understanding the role of otherwise opaque pieces of neural functionality through the inductive bias they induce.
    Comment: 15 pages, 11 figures
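
    The abstract's core idea, meta-learning the functions that a trained circuit generalises well from finite data, can be sketched as a bilevel optimisation. The following is a minimal, hypothetical illustration, not the authors' released code: the inner step trains a stand-in circuit (closed-form ridge regression, substituting for an arbitrary differentiable circuit) on a small training set, and the outer loop adjusts a candidate target function so that the trained circuit generalises it to unseen inputs. All names, shapes, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of the bilevel "easy-to-generalise functions" idea.
import jax
import jax.numpy as jnp

D = 5
xs_train = jax.random.normal(jax.random.PRNGKey(0), (10, D))   # finite training set
xs_test = jax.random.normal(jax.random.PRNGKey(1), (200, D))   # unseen inputs

def target(theta, x):
    # Meta-learned candidate function: a small MLP parameterised by theta.
    w1, w2 = theta
    return jnp.tanh(x @ w1) @ w2

def fit_circuit(y_train):
    # Inner step: train the circuit on the finite data. Closed-form ridge
    # regression keeps the sketch short; a gradient-trained circuit would
    # be differentiated through instead.
    A = xs_train.T @ xs_train + 1e-2 * jnp.eye(D)
    return jnp.linalg.solve(A, xs_train.T @ y_train)

def generalisation_loss(theta):
    # Outer objective: error of the trained circuit on unseen inputs when
    # the ground truth is target(theta, .). Normalising by the function's
    # variance rules out the trivial constant function.
    w = fit_circuit(target(theta, xs_train))
    y_test = target(theta, xs_test)
    return jnp.mean((xs_test @ w - y_test) ** 2) / (jnp.var(y_test) + 1e-6)

theta = (jax.random.normal(jax.random.PRNGKey(2), (D, 16)),
         jax.random.normal(jax.random.PRNGKey(3), (16,)))
grad_fn = jax.jit(jax.grad(generalisation_loss))
for _ in range(300):
    g = grad_fn(theta)
    theta = jax.tree_util.tree_map(lambda p, gp: p - 0.05 * gp, theta, g)
# theta now parameterises a function the circuit finds easy to generalise;
# for a linear circuit like ridge regression, the outer loop is driven
# towards (nearly) linear functions, matching the analytically known bias.
```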
